Overall indices for assessing agreement among multiple raters


Similar articles

Assessing agreement with multiple raters on correlated kappa statistics.

In clinical studies, it is often of interest to assess the diagnostic agreement among clinicians on certain symptoms. Previous work has focused on the agreement between two clinicians under two different conditions, or the agreement among multiple clinicians under one condition. Few have discussed agreement studies with a design in which multiple clinicians examine the same group of patients under t...
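
The entry above extends kappa-based agreement to correlated settings; the two-rater Cohen's kappa it builds on is simple enough to state in code. A minimal sketch in Python, with an illustrative function name and toy data (not taken from the paper):

import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters assigning nominal categories."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    # Observed agreement: fraction of subjects on which the raters coincide.
    p_o = np.mean(r1 == r2)
    # Chance agreement under independence, from the two marginal distributions.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

# Toy data: two clinicians rating 10 patients as 0 (absent) / 1 (present).
rater1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohens_kappa(rater1, rater2))  # raw agreement 0.80, kappa ~0.58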


A-Kappa: A measure of Agreement among Multiple Raters

Medical data and biomedical studies are often imbalanced, with a majority of observations coming from healthy or normal subjects. In the presence of such imbalances, agreement among multiple raters based on Fleiss’ Kappa (FK) produces counterintuitive results. Simulations suggest that the degree of FK’s misrepresentation of the observed agreement may be directly related to the degree o...
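
The imbalance effect described above (sometimes called the kappa paradox) is easy to reproduce: with Fleiss' kappa, near-unanimous ratings on a dominant "normal" category can yield a low kappa despite very high raw agreement. A minimal sketch, using illustrative toy counts rather than the paper's simulations:

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i, j] = raters assigning subject i to category j."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                 # raters per subject (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()   # overall category proportions
    # Per-subject agreement: fraction of concordant rater pairs.
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)
    return (P_bar - P_e) / (1.0 - P_e)

# 5 raters, 20 subjects, 18 of them unanimously "normal" (first category).
counts = [[5, 0]] * 18 + [[3, 2], [3, 2]]
print(fleiss_kappa(counts))  # mean raw agreement is 0.94, yet kappa is ~0.22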


Overall concordance correlation coefficient for evaluating agreement among multiple observers.

Accurate and precise measurement is an important component of any proper study design. As elaborated by Lin (1989, Biometrics 45, 255-268), the concordance correlation coefficient (CCC) is more appropriate than other indices for measuring agreement when the variable of interest is continuous. However, this agreement index is defined in the context of comparing two fixed observers. In order to u...
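
For reference, Lin's two-observer CCC, which the overall CCC generalizes, scales the covariance by the two variances plus the squared mean difference: rho_c = 2*s_xy / (s_x^2 + s_y^2 + (m_x - m_y)^2). A minimal sketch using the 1/n moment estimators of Lin (1989); the data are illustrative:

import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient for two fixed observers."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()            # 1/n estimators, as in Lin (1989)
    sxy = np.mean((x - mx) * (y - my))     # covariance, same normalisation
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Toy data: two observers measuring the same six subjects.
obs1 = [10.2, 11.5, 9.8, 12.1, 10.9, 11.3]
obs2 = [10.0, 11.9, 9.5, 12.4, 10.7, 11.6]
print(lin_ccc(obs1, obs2))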


Evidence levels for neuroradiology articles: low agreement among raters.

BACKGROUND AND PURPOSE: Because evidence-based articles are difficult to recognize among the large volume of publications available, some journals have adopted evidence-based medicine criteria to classify their articles. Our purpose was to determine whether an evidence-based medicine classification used by a subspecialty-imaging journal allowed consistent categorization of levels of evidence amo...


Assessing Agreement between Multiple Raters with Missing Rating Information, Applied to Breast Cancer Tumour Grading

BACKGROUND: We consider the problem of assessing inter-rater agreement when there are missing data and a large number of raters. Previous studies have shown only 'moderate' agreement between pathologists in grading breast cancer tumour specimens. We analyse a large but incomplete data-set consisting of 24,177 grades, on a discrete 1-3 scale, provided by 732 pathologists for 52 samples. METHODO...
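
The authors' methodology is truncated above; as a simple baseline only, not the paper's method, per-sample raw agreement can still be computed from an incomplete rater-by-sample matrix by using, for each sample, just the pathologists who graded it. A sketch with illustrative toy data (NaN marks a missing grade):

import numpy as np

def agreement_with_missing(ratings, categories=(1, 2, 3)):
    """Mean per-sample pairwise agreement, ignoring missing grades; each
    sample may have been graded by a different number of raters."""
    ratings = np.asarray(ratings, dtype=float)
    per_sample = []
    for col in ratings.T:                  # one sample at a time
        graded = col[~np.isnan(col)]
        n_i = len(graded)
        if n_i < 2:
            continue                       # agreement undefined for < 2 raters
        n_ij = np.array([(graded == c).sum() for c in categories])
        per_sample.append((np.sum(n_ij**2) - n_i) / (n_i * (n_i - 1)))
    return float(np.mean(per_sample))

# Toy data: 4 pathologists (rows) grading 3 samples (columns) on a 1-3 scale.
nan = float("nan")
ratings = [[1,   2, nan],
           [1,   3, 2],
           [nan, 2, 2],
           [1,   2, nan]]
print(agreement_with_missing(ratings))  # ~0.83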



Journal

Title: Statistics in Medicine

Year: 2018

ISSN: 0277-6715, 1097-0258

DOI: 10.1002/sim.7912